7 research outputs found

    Automated pick-up of suturing needles for robotic surgical assistance

    Get PDF
    Robot-assisted laparoscopic prostatectomy (RALP) is a treatment for prostate cancer that involves complete or nerve-sparing removal of the prostate tissue containing cancer. After removal, the bladder neck is sutured directly to the urethra. This procedure, urethrovesical anastomosis, is one of the most dexterity-demanding tasks during RALP. Two suturing instruments and a pair of needles are used in combination to perform a running stitch during urethrovesical anastomosis. While robotic instruments provide enhanced dexterity to perform the anastomosis, it is still highly challenging and difficult to learn. In this paper, we present a vision-guided needle grasping method for automatically grasping a needle that has been inserted into the patient prior to anastomosis. We aim to automatically grasp the suturing needle in a position that avoids hand-offs and immediately enables the start of suturing. The full grasping process can be broken down into: a needle detection algorithm; an approach phase, where the surgical tool moves closer to the needle based on visual feedback; and a grasping phase, with path planning based on observed surgical practice. Our experimental results show examples of successful autonomous grasping that has the potential to simplify and decrease operative time in RALP by assisting with a small component of urethrovesical anastomosis.
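    The three phases named above (detection, visually guided approach, planned grasp) can be summarised as a simple control loop. The Python outline below is only an illustrative sketch of such a pipeline; detect_needle, move_towards and plan_grasp_path are hypothetical placeholders, not functions from the paper.

        # Illustrative sketch of a detect / approach / grasp loop (hypothetical API).
        def autonomous_needle_pickup(camera, tool, approach_tolerance_mm=2.0):
            # Needle detection from the endoscopic image.
            needle_pose = detect_needle(camera.capture())           # assumed detector
            if needle_pose is None:
                return False
            # Approach phase: small, visually guided steps towards the needle.
            while tool.distance_to(needle_pose) > approach_tolerance_mm:
                needle_pose = detect_needle(camera.capture())       # re-detect each frame
                move_towards(tool, needle_pose)
            # Grasping phase: follow a path planned from observed surgical practice.
            for waypoint in plan_grasp_path(needle_pose):
                tool.move_to(waypoint)
            tool.close_gripper()
            return True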

    Identifying key mechanisms leading to visual recognition errors for missed colorectal polyps using eye-tracking technology

    Get PDF
    BACKGROUND AND AIMS: Lack of visual recognition of colorectal polyps may lead to interval cancers. The mechanisms contributing to perceptual variation, particularly for subtle and advanced colorectal neoplasia, have scarcely been investigated. We aimed to evaluate visual recognition errors and provide novel mechanistic insights. METHODS: Eleven participants (7 trainees, 4 medical students) evaluated images from the UCL polyp perception dataset, containing 25 polyps, using eye-tracking equipment. Gaze errors were defined as those where the lesion was not observed according to eye-tracking technology. Cognitive errors occurred when lesions were observed but not recognised as polyps by participants. A video study was also performed including 39 subtle polyps, in which polyp recognition performance was compared with a convolutional neural network (CNN). RESULTS: Cognitive errors occurred more frequently than gaze errors overall (65.6%), with a significantly higher proportion in trainees (P=0.0264). In the video validation, the CNN detected significantly more polyps than trainees and medical students, with per-polyp sensitivities of 79.5%, 30.0% and 15.4%, respectively. CONCLUSIONS: Cognitive errors were the most common reason for visual recognition errors. The impact of interventions such as artificial intelligence, particularly on different types of perceptual errors, needs further investigation, including potential effects on learning curves. To facilitate future research, a publicly accessible visual perception colonoscopy polyp database was created.
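    The gaze-versus-cognitive distinction above reduces to whether any recorded fixation dwelt on the lesion region before the miss. A minimal sketch of that classification rule is given below; the Fixation structure, the bounding-box representation and the dwell threshold are assumptions made for illustration, not the study's actual data format.

        from dataclasses import dataclass

        @dataclass
        class Fixation:
            x: float            # gaze position in image pixels (assumed format)
            y: float
            duration_ms: float

        def classify_miss(fixations, lesion_box, min_dwell_ms=100.0):
            # lesion_box is (x_min, y_min, x_max, y_max) in the same pixel coordinates.
            # Too little dwell time on the lesion -> it was never looked at (gaze error);
            # otherwise it was looked at but not recognised (cognitive error).
            x0, y0, x1, y1 = lesion_box
            dwell = sum(f.duration_ms for f in fixations
                        if x0 <= f.x <= x1 and y0 <= f.y <= y1)
            return "gaze error" if dwell < min_dwell_ms else "cognitive error"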

    Shape-from-Template

    No full text

    Refractive Structure-from-Motion Through a Flat Refractive Interface

    No full text
    © 2017 IEEE. Recovering 3D scene geometry from underwater images involves the Refractive Structure-from-Motion (RSfM) problem, where the image distortions caused by light refraction at the interface between different propagation media invalidate the single-viewpoint assumption. Direct use of the pinhole camera model in RSfM leads to inaccurate camera pose estimation and consequently to drift. RSfM methods have been thoroughly studied for the case of a thick glass interface, which assumes two refractive interfaces between the camera and the viewed scene. On the other hand, when the camera lens is in direct contact with the water, there is only one refractive interface. By explicitly considering a refractive interface, we develop a succinct derivation of the refractive fundamental matrix in the form of the generalised epipolar constraint for an axial camera. We use the refractive fundamental matrix to refine initial pose estimates obtained by assuming the pinhole model. This strategy allows us to robustly estimate underwater camera poses where other methods suffer from high sensitivity to noise. We also formulate a new four-view constraint enforcing camera pose consistency along a video, which leads to a novel RSfM framework. For validation, we use synthetic data to show the numerical properties of our method, and we provide results on real data to demonstrate performance within laboratory settings and for applications in endoscopy.
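    For context, the generalised epipolar constraint mentioned above relates rays in Plücker coordinates L_i = (q_i, q_i') (direction and moment) observed by a generalised camera before and after a rigid motion (R, t). The expression below is the standard Pless form of that constraint, given as background rather than as the paper's refractive specialisation:

        \mathbf{q}_2^{\top} [\mathbf{t}]_{\times} R \, \mathbf{q}_1
          + \mathbf{q}_2^{\top} R \, \mathbf{q}_1'
          + \mathbf{q}_2'^{\top} R \, \mathbf{q}_1 = 0
        \quad\Longleftrightarrow\quad
        \begin{pmatrix} \mathbf{q}_2 \\ \mathbf{q}_2' \end{pmatrix}^{\!\top}
        \begin{pmatrix} [\mathbf{t}]_{\times} R & R \\ R & 0 \end{pmatrix}
        \begin{pmatrix} \mathbf{q}_1 \\ \mathbf{q}_1' \end{pmatrix} = 0 .

    Here [\mathbf{t}]_{\times} R plays the role of the essential matrix; when all rays pass through a single centre (moments q_i' = 0), the expression reduces to the ordinary epipolar constraint.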

    A Continuum Robot and Control Interface for Surgical Assist in Fetoscopic Interventions

    No full text
    Twin-Twin Transfusion Syndrome (TTTS) requires interventional treatment using a fetoscopically introduced laser to sever the shared blood supply between the fetuses. This is a delicate procedure relying on small instrumentation with limited articulation to guide the laser tip and a narrow field of view to visualize all relevant vascular connections. In this paper, we report on a mechatronic design for a co-manipulated instrument that combines concentric tube actuation with a larger manipulator constrained by a remote centre of motion (RCM). A stereoscopic camera is mounted at the distal tip and used for imaging. Our mechanism provides enhanced dexterity and stability of the imaging device. We demonstrate that the imaging system can be used for computing geometry and enhancing the view at the operating site. Results using electromagnetic sensors for verification, and comparison to visual odometry from the distal sensor, show that our system is promising and can be developed further for multiple clinical needs in fetoscopic procedures.
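    The verification described above, comparing distal visual odometry against electromagnetic (EM) tracking, amounts to rigidly aligning the two position trajectories and reporting an error statistic. The snippet below is a generic sketch of such a comparison (Kabsch alignment followed by RMSE); it is not the paper's evaluation code, and the array layout is an assumption.

        import numpy as np

        def trajectory_rmse(vo_xyz, em_xyz):
            # vo_xyz, em_xyz: (N, 3) arrays of time-synchronised 3D positions (assumed layout).
            vo_c = vo_xyz - vo_xyz.mean(axis=0)            # centre both trajectories
            em_c = em_xyz - em_xyz.mean(axis=0)
            U, _, Vt = np.linalg.svd(em_c.T @ vo_c)        # Kabsch: best-fit rotation
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
            R = U @ D @ Vt                                 # rotation mapping VO onto the EM frame
            aligned = (R @ vo_c.T).T + em_xyz.mean(axis=0)
            return float(np.sqrt(np.mean(np.sum((aligned - em_xyz) ** 2, axis=1))))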

    Multimodal and Multimedia Image Analysis and Collaborative Networking for Digestive Endoscopy

    No full text
    Objective: The ultimate goal of the Syseo project is to create a chain of collaborative processes that allows the hepato-gastroenterology endoscopy specialist to manage images easily. Methods: Syseo contributes to several domains of computer science. First, the proposed storage system relies on DICOM, one of the most important medical standards. Results: Syseo consists of four main components: (1) a data management system relying on the well-known standard DICOM format; (2) a polyp ontology and description logics to manage gastroenterological images; (3) measuring tools to estimate the size of neoplasias from images; and (4) pearly user interfaces to enhance collaboration. Discussion: Preliminary results of Syseo are promising, since the proposed solutions enable efficient storage, annotation and retrieval of medical data, while providing relatively accurate measuring tools for physicians and medical staff.
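    Component (1) above is a DICOM-based data management system. As a generic illustration of handling DICOM endoscopy data in Python (not the Syseo implementation), the widely used pydicom library can read an image together with a few standard metadata fields:

        import pydicom

        def load_endoscopy_dicom(path):
            # Generic example: read a DICOM file and extract standard attributes.
            ds = pydicom.dcmread(path)
            record = {
                "patient_id": ds.get("PatientID", ""),
                "modality": ds.get("Modality", ""),
                "study_date": ds.get("StudyDate", ""),
            }
            pixels = ds.pixel_array        # image data as a NumPy array
            return record, pixels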